
fix: prevent duplicate scheduled runs via deployment schedule lock + ID preservation #20921

Draft
devin-ai-integration[bot] wants to merge 7 commits into main from devin/1772469892-fix-schedule-duplication-on-redeploy

Conversation


devin-ai-integration bot commented Mar 2, 2026

What this PR fixes

This fixes duplicate auto-scheduled flow runs caused by deployment schedule races.

Root cause: when a deployment's schedules were replaced, every schedule row was deleted and recreated, so the schedule IDs changed. Because scheduler idempotency keys include deployment_schedule_id, a race between the scheduler reading stale schedule IDs and a deployment update creating new ones could produce duplicate runs.
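As an illustration of the root cause (the key format below is a hypothetical sketch, not Prefect's actual implementation), a dedup key that embeds the schedule ID stops matching as soon as the schedule row is recreated with a fresh UUID:

```python
from datetime import datetime, timezone
from uuid import uuid4

def idempotency_key(deployment_schedule_id, scheduled_time):
    # Hypothetical key format for illustration; the real key differs,
    # but the schedule ID is the component that matters here.
    return f"scheduled:{deployment_schedule_id}:{scheduled_time.isoformat()}"

slot = datetime(2026, 3, 2, 12, 0, tzinfo=timezone.utc)
old_id, new_id = uuid4(), uuid4()

# Same wall-clock slot, but the schedule row was deleted and recreated
# with a new UUID, so deduplication no longer sees a match:
assert idempotency_key(old_id, slot) != idempotency_key(new_id, slot)
```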

Approach (short)

  1. Preserve schedule IDs in no-slug schedule updates for PATCH /deployments/{id}.
  2. Preserve schedule IDs in POST /deployments upsert path as well (important for flow.serve / deployment.apply flows).
  3. Add deployment schedule locking for create/update and align lock key usage so POST and PATCH contend on the same identity key.
  4. Fail closed when Redis locking is configured but unavailable (503 instead of silent in-memory fallback).

Locking behavior

  • Lock key for create/upsert and update is aligned to deployment identity: {flow_id}:{deployment_name}.
  • Redis configured (PREFECT_MESSAGING_BROKER contains prefect_redis): use Redis lock with blocking=False, 30s timeout.
  • Non-Redis broker: use per-key in-memory asyncio.Lock with fail-fast 409 behavior.
  • Redis broker configured but Redis client unavailable: return 503 to avoid unsafe process-local fallback in distributed setups.
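A minimal sketch of that selection policy, under stated assumptions: the helper names are invented for illustration and `LockUnavailable` stands in for the HTTP 503/409 responses the real API handlers return.

```python
import asyncio

_in_memory_locks: dict[str, asyncio.Lock] = {}

class LockUnavailable(Exception):
    """Stand-in for the 503 / 409 HTTP responses described above."""
    def __init__(self, status: int):
        self.status = status

def schedule_lock_key(flow_id: str, deployment_name: str) -> str:
    # POST and PATCH both derive the key from deployment identity,
    # so create/upsert and update contend on the same lock.
    return f"{flow_id}:{deployment_name}"

async def acquire_schedule_lock(key: str, broker: str, redis_client=None):
    if "prefect_redis" in broker:
        if redis_client is None:
            # Fail closed: a process-local fallback would be unsafe
            # when multiple API servers share one database.
            raise LockUnavailable(503)
        # redis-py style lock; non-blocking with a 30s expiry
        return redis_client.lock(key, blocking=False, timeout=30)
    lock = _in_memory_locks.setdefault(key, asyncio.Lock())
    if lock.locked():
        raise LockUnavailable(409)  # fail fast on a concurrent update
    await lock.acquire()
    return lock
```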

No-slug schedule reconciliation

For both PATCH updates and POST upserts:

  • Read existing schedules
  • Sort by ID for stable positional matching
  • Update existing rows in place for overlapping positions (preserve IDs)
  • Create rows only for additional incoming schedules
  • Delete extra old rows only when incoming list is shorter

This stabilizes schedule IDs across redeploys and keeps scheduler idempotency keys stable.
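The steps above can be sketched roughly as follows (illustrative names and dict-shaped rows, not the actual model-layer code):

```python
def reconcile_schedules(existing: list[dict], incoming: list[dict]):
    """Positional reconciliation for slug-less schedule updates.

    Returns (updates, creates, delete_ids); existing IDs survive for
    overlapping positions, so scheduler idempotency keys stay stable.
    """
    existing = sorted(existing, key=lambda s: str(s["id"]))  # stable order
    overlap = min(len(existing), len(incoming))
    # overwrite config in place, keeping the existing row's ID
    updates = [{**incoming[i], "id": existing[i]["id"]} for i in range(overlap)]
    creates = incoming[overlap:]                        # extra incoming rows
    delete_ids = [s["id"] for s in existing[overlap:]]  # only when list shrank
    return updates, creates, delete_ids
```

Note the limitation called out under "Known limitations": matching is purely positional, so reordering the incoming list can remap configs onto different existing IDs.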

Test coverage added/updated

API tests (tests/server/orchestration/api/test_deployments.py):

  • Updated no-slug PATCH multi-schedule test to assert ID preservation.
  • Added test_create_deployment_upsert_preserves_schedule_ids_without_slugs for POST upsert path.
  • Added test_post_and_patch_contend_on_same_schedule_lock_key (cross-route lock contention).
  • Added test_deployment_schedule_lock_fails_closed_when_redis_unavailable (503 behavior).

Integration test (integration-tests/test_schedule_statefulness.py):

  • Added test_schedule_id_stability_for_no_slug_redeploy.
  • Uses ephemeral local API settings to ensure integration coverage runs against branch code and does not depend on external profile/server configuration.

Known limitations / follow-ups

  • Positional matching is heuristic for no-slug updates; explicit reordering can remap which existing ID gets which schedule config.
  • No full real-Redis integration contention test in this PR (current lock tests cover API behavior and error semantics).

Link to Devin Session

Requested by: @zzstoatzz

fix: preserve schedule IDs during deployment updates to prevent duplicate runs

When a deployment is updated without schedule slugs, the server
previously deleted all existing schedules and recreated them with new
UUIDs. This changed the idempotency key for auto-scheduled flow runs
(which includes the schedule ID), causing the scheduler to create
duplicate runs when a race condition occurred between the scheduler
reading stale schedule IDs and the deployment update creating new ones.

This fix updates existing schedules in place by position rather than
deleting and recreating them, preserving their IDs and keeping the
idempotency keys stable across redeployments.

Co-Authored-By: Nate Nowack <nate@prefect.io>

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring


codspeed-hq bot commented Mar 2, 2026

Merging this PR will not alter performance

✅ 2 untouched benchmarks


Comparing devin/1772469892-fix-schedule-duplication-on-redeploy (6616f29) with main (f8874f4)

Open in CodSpeed

devin-ai-integration bot and others added 3 commits March 2, 2026 17:11
The test previously asserted that schedule IDs change after update (the
old delete-and-recreate behavior). Updated to validate the new behavior:
- Schedule IDs are preserved (order-independent set comparison)
- Schedule configs are correctly applied (order-independent lookup)

Also fixed a pre-existing bug where the original assertion compared UUID
objects to strings, which always evaluated to not-equal regardless of
actual ID values.

Co-Authored-By: Nate Nowack <nate@prefect.io>
Adds a distributed lock around create_deployment and update_deployment
API handlers, matching Nebula's approach. Uses Redis lock when
prefect-redis is configured as the messaging broker (HA deployments),
falls back to per-deployment asyncio.Lock for single-server setups.

Combined with the model-layer schedule ID preservation from the
previous commit, this addresses both:
1. Concurrent update races (lock prevents interleaving)
2. Scheduler-vs-update races (preserved IDs keep idempotency keys stable)

Co-Authored-By: Nate Nowack <nate@prefect.io>
Co-Authored-By: Nate Nowack <nate@prefect.io>
@devin-ai-integration devin-ai-integration bot changed the title fix: preserve schedule IDs during deployment updates to prevent duplicate runs fix: prevent duplicate scheduled runs via deployment schedule lock + ID preservation Mar 2, 2026
@zzstoatzz zzstoatzz marked this pull request as ready for review March 4, 2026 19:29
@zzstoatzz zzstoatzz marked this pull request as draft March 4, 2026 19:29
